Here we implement the Householder Transformation to perform the QR Decomposition of a matrix. We then use it to obtain the Least Squares Solution, which is important in most practical situations: noise in the data often makes the system of equations inconsistent, and least squares gives us the closest attainable solution.
CODE & RESULTS
We will be using Householder Matrices to bring about the QR Decomposition of the matrix. At no step will we compute the Householder Matrices explicitly, to save both space and computation. At each step we select a column and, depending on its position in the matrix, transform it so that all elements below the pivot become 0. Let's go through what each piece of our code does.
- The following code divides a vector by its \(L_2\)-norm to make it a unit vector.
# Making Unit vectors
unit <- function(v) {v/sqrt(sum(v*v))}
- The following code multiplies a vector \(x\) by the Householder Matrix corresponding to a unit vector \(u\) (to reflect \(x\) onto a vector \(y\) of equal norm, take \(u = \frac{x-y}{\|x-y\|}\)), without forming the matrix explicitly, to save space and computation: \[H = I - 2uu'\] so that \(Hx = x - 2(u'x)u\), which is exactly what the code computes.
# Doing multiplication with Householder Matrix
hmult <- function(u, x) {x - 2*sum(u*x)*u}
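As a quick sanity check (an illustrative snippet we add here, not part of the project code), hmult should agree with multiplication by the explicit matrix \(I - 2uu'\):
# Illustrative check: hmult(u, x) equals the explicit (I - 2uu')x
u <- unit(c(1, 2, 2))
x <- c(3, -1, 4)
H <- diag(3) - 2 * (u %*% t(u))   # the Householder matrix, formed explicitly
max(abs(hmult(u, x) - H %*% x))   # ~ 0, up to floating-point error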
- The following code finds the vector \(u\) (the "shaver") for a given vector \(x\): multiplying \(x\) by the Householder Matrix corresponding to \(u\) sends it to a vector of the same norm whose elements, except the first, are all 0.
# Creating shaver for shaving off the cols to concentrate norm on top
shaver <- function(x){
  x[1] <- x[1] - sqrt(sum(x*x))
  unit(x)
}
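For instance (a made-up input, purely to illustrate), the vector \((3, 4)\) gets its whole norm concentrated on top:
# Illustrative use of shaver: send c(3, 4) to (norm, 0)
x <- c(3, 4)
u <- shaver(x)
hmult(u, x)   # approximately c(5, 0): the L2 norm now sits in the first entry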
- While carrying out the \(QR\)-Decomposition we will reuse the original matrix's space to store the \(Q\) and \(R\) matrices; the diagonal elements of \(R\) are kept in a separate array. This lets us use space efficiently. Given one such packed matrix, the code below produces the \(Q\) and \(R\) matrices from it. We also use a function that efficiently computes the product of \(Q\) with a vector, say \(x\).
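To picture the packing, here is a sketch of the layout for a hypothetical \(4 \times 3\) matrix (our illustration of the scheme just described):
# Hypothetical 4x3 packed layout: column j holds the Householder vector u_j
# from row j downwards (diagonal included); the strictly upper triangle holds
# R's off-diagonal entries; R's diagonal lives in the separate array R.diag.
#   [ u1  r12 r13 ]
#   [ u1  u2  r23 ]
#   [ u1  u2  u3  ]
#   [ u1  u2  u3  ]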
# Computing R from the packed QR Decomposed Matrix
compute.R <- function(m, R.diag){
  for (i in 1:ncol(m)) {
    m[i:nrow(m), i] = numeric(nrow(m) - i + 1)   # zero out the stored u_i
    m[i, i] = R.diag[i]                          # restore R's diagonal
  }
  return(list("R" = m))
}
# Computing Qx when the packed QR Decomposed Matrix is passed along with x vector
multiply.Q <- function(m, x, t = 1){ # t = 1 computes Qx; t = 0 computes Q'x
  res.vect <- x
  ulist <- list()
  if(t == 1)
    k = ncol(m):1   # Qx = H1 H2 ... Hk x: apply Hk first
  else
    k = 1:ncol(m)   # Q'x = Hk ... H2 H1 x: apply H1 first
  for (i in k) {
    u = m[i:nrow(m), i]       # Householder vector stored in column i
    v <- numeric(nrow(m))
    v[i:nrow(m)] = u          # pad with zeros to full length
    name <- paste("u", i, ":", sep = "")
    ulist[[name]] <- u
    res.vect <- hmult(v, res.vect)
  }
  return(list("Qx" = res.vect, "ulist" = ulist))
}
# Computing Q from the packed QR Decomposed Matrix
compute.Q <- function(m){
  e <- numeric(nrow(m))
  e[1] = 1
  A <- multiply.Q(m, e)   # the i-th column of Q is Q e_i
  Q <- A$Qx
  for (i in 2:nrow(m)){
    e <- numeric(nrow(m))
    e[i] = 1
    Q <- rbind(Q, multiply.Q(m, e)$Qx)
  }
  return(list("Q" = t(Q), "U" = rev(A$ulist)))
}
- Now, the following code implements the \(QR\)-Decomposition. The rrefmatrix function can be found in the Gauss-Jordan Elimination Project or in the Source Code of this file.
# Efficient QR decomposition
Eff.QR <- function(mat){
  R.diags = numeric(ncol(mat))
  rref <- rrefmatrix(mat)
  pivs <- rref$Pivs
  if(rref$Rank != ncol(mat))
    warning("Matrix is not Full Column Rank")
  for (i in 1:ncol(mat)) {
    if(i %in% pivs)
      u = shaver(mat[i:nrow(mat), i])   # reflector that zeroes below the pivot
    else
      u = rep(0, nrow(mat) - i + 1)     # non-pivot column: identity reflection
    for (j in i:ncol(mat)) {
      mat[i:nrow(mat), j] <- hmult(u, mat[i:nrow(mat), j])
    }
    if(i %in% pivs)
      R.diags[i] = mat[i, i]
    else
      R.diags[i] = 0
    mat[i:nrow(mat), i] = u             # pack u into the column it just shaved
  }
  computeQ <- compute.Q(mat)
  return(list("Packed_Matrix" = mat, "R_Diag" = R.diags, "Q" = computeQ$Q,
              "R" = compute.R(mat, R.diags)$R, "U" = computeQ$U))
}
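As a sanity check (our addition; it assumes rrefmatrix from the Gauss-Jordan Elimination Project is loaded), we can compare against base R's qr(). Individual columns of \(Q\) may differ in sign between implementations, but both factorizations must reproduce \(A\):
# Illustrative comparison with base R's QR
A <- matrix(c(1,1,1,7,-2,2,3,-2,2,8,3,2,5,1,5,4,4,0,3,4), nrow = 5)
res <- Eff.QR(A)
max(abs(res$Q %*% res$R - A))             # ~ 0: our factors reproduce A
base <- qr(A)
max(abs(qr.Q(base) %*% qr.R(base) - A))   # ~ 0 for base R's factors too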
Let’s try out the \(QR\)-Decomposition on a matrix.
mat <- matrix(c(1,1,1,7,-2,2,3,-2,2,8,3,2,5,1,5,4,4,0,3,4), nrow = 5, ncol = 4)
kable(mat)
|    |    |    |    |
|---:|---:|---:|---:|
|  1 |  2 |  3 |  4 |
|  1 |  3 |  2 |  4 |
|  1 | -2 |  5 |  0 |
|  7 |  2 |  1 |  3 |
| -2 |  8 |  5 |  4 |
results <- Eff.QR(mat)
kable(results$Packed_Matrix, caption = "Packed Matrix")
Packed Matrix

|            |            |            |            |
|-----------:|-----------:|-----------:|-----------:|
| -0.6581677 |  0.1336306 |  0.9354143 |  2.8062430 |
|  0.1015172 | -0.5671608 |  4.7594119 |  6.2509655 |
|  0.1015172 | -0.1637329 | -0.1631587 |  1.6699700 |
|  0.7106201 |  0.3839700 |  0.7594389 | -0.8554329 |
| -0.2030343 |  0.7099910 |  0.6297871 | -0.5179136 |
kable(results$R_Diag, caption = "R Diagonal Elements", col.names = "Diag")
R Diagonal Elements

|     Diag |
|---------:|
| 7.483315 |
| 9.218576 |
| 6.361839 |
| 2.694741 |
kable(results$Q, caption = "Q Matrix")
Q Matrix

|            |            |            |            |            |
|-----------:|-----------:|-----------:|-----------:|-----------:|
|  0.1336306 |  0.2150162 |  0.2910557 |  0.6660703 | -0.6383948 |
|  0.1336306 |  0.3234928 |  0.0527150 |  0.5621412 |  0.7474715 |
|  0.1336306 | -0.2188903 |  0.9300438 | -0.2077637 |  0.1615012 |
|  0.9354143 |  0.2033937 | -0.1325142 | -0.2505294 | -0.0574978 |
| -0.2672612 |  0.8716872 |  0.1731075 | -0.3666291 | -0.0659534 |
kable(results$R, caption = "R Matrix")
R Matrix

|          |           |           |          |
|---------:|----------:|----------:|---------:|
| 7.483315 | 0.1336306 | 0.9354143 | 2.806243 |
| 0.000000 | 9.2185760 | 4.7594119 | 6.250966 |
| 0.000000 | 0.0000000 | 6.3618392 | 1.669970 |
| 0.000000 | 0.0000000 | 0.0000000 | 2.694741 |
| 0.000000 | 0.0000000 | 0.0000000 | 0.000000 |
kable(results$U, caption = "u vectors", col.names = "u")
u vectors

- u1: -0.6581677, 0.1015172, 0.1015172, 0.7106201, -0.2030343
- u2: -0.5671608, -0.1637329, 0.3839700, 0.7099910
- u3: -0.1631587, 0.7594389, 0.6297871
- u4: -0.8554329, -0.5179136
To verify that it worked, let's check whether \(Q\times R = mat\) and \(Q'\times Q = I\).
kable(results$Q %*% results$R, caption = "Original Matrix")
Original Matrix

|    |    |    |    |
|---:|---:|---:|---:|
|  1 |  2 |  3 |  4 |
|  1 |  3 |  2 |  4 |
|  1 | -2 |  5 |  0 |
|  7 |  2 |  1 |  3 |
| -2 |  8 |  5 |  4 |
kable(t(results$Q) %*% results$Q, caption = "Identity")
Identity

|   |   |   |   |   |
|--:|--:|--:|--:|--:|
| 1 | 0 | 0 | 0 | 0 |
| 0 | 1 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 | 0 |
| 0 | 0 | 0 | 1 | 0 |
| 0 | 0 | 0 | 0 | 1 |
Indeed we recover the original matrix and \(Q'Q = I\), so the \(QR\)-Decomposition worked.
Now, returning to our original goal: using the \(QR\)-Decomposition to find the Least Squares Solution of the equation \(Ax = b\).
\[
Ax = b
\]
\[
QRx = b
\]
\[
Rx = Q'b
\]
Since \(Q\) is orthogonal, multiplying both sides of the second equation by \(Q'\) gives the last one, which we can then solve for \(x\) by back-substitution on the top \(n\) rows, the remaining rows of \(R\) being zero.
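To spell out why this yields the least squares solution (a brief derivation we add for completeness): multiplication by the orthogonal \(Q'\) preserves norms, so writing \(R_1\) for the top \(n \times n\) block of \(R\) and splitting \(Q'b\) into \(c_1\) and \(c_2\) accordingly,
\[
\|Ax - b\|^2 = \|Q'(Ax - b)\|^2 = \|Rx - Q'b\|^2 = \|R_1x - c_1\|^2 + \|c_2\|^2,
\]
which is minimized exactly when \(R_1x = c_1\).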
- The following function implements the above idea.
# Finding Least Squares Solution
LS.solve <- function(A, b){
  QR <- Eff.QR(A)
  R1 <- QR$R[1:ncol(QR$R), ]                                     # top square block of R
  c1 <- multiply.Q(QR$Packed_Matrix, b, t = 0)$Qx[1:ncol(QR$R)]  # top part of Q'b
  # Backward Substitution
  n <- nrow(R1)
  x <- rep(0, n)
  x[n] <- c1[n] / R1[n, n]
  for (i in (n-1):1) {
    x[i] <- (c1[i] - sum(R1[i, (i+1):n] * x[(i+1):n])) / R1[i, i]
  }
  return(list("x" = x))
}
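As a cross-check (our addition; qr.solve from base R returns the least squares solution when the system is overdetermined):
# Illustrative cross-check against base R
A <- matrix(c(1,1,1,7,-2,2,3,-2,2,8,3,2,5,1,5,4,4,0,3,4), nrow = 5)
b <- c(1, 2, 3, 4, 5)
LS.solve(A, b)$x
qr.solve(A, b)   # should agree up to floating-point error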
Let’s try it out on a matrix \(A\) and a vector \(b\).
mat <- matrix(c(1,1,1,7,-2,2,3,-2,2,8,3,2,5,1,5,4,4,0,3,4), nrow = 5, ncol = 4)
kable(mat, caption = "A")
A

|    |    |    |    |
|---:|---:|---:|---:|
|  1 |  2 |  3 |  4 |
|  1 |  3 |  2 |  4 |
|  1 | -2 |  5 |  0 |
|  7 |  2 |  1 |  3 |
| -2 |  8 |  5 |  4 |
kable(c(1,2,3,4,5), caption = "b")
The Least Squares Solution is
kable(LS.solve(mat, c(1,2,3,4,5)))
|          x |
|-----------:|
|  0.5598919 |
|  0.6333617 |
|  0.7161303 |
| -0.6190581 |
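As one last check (our addition, not part of the original output), the least squares residual should be orthogonal to the column space of \(A\):
# Illustrative residual check: t(A) %*% (b - A x) should vanish
x <- LS.solve(mat, c(1, 2, 3, 4, 5))$x
r <- c(1, 2, 3, 4, 5) - mat %*% x   # residual vector
t(mat) %*% r                        # ~ zero vector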